Analysis of three-time-slot P-persistent CSMA protocol with variable collision duration in wireless sensor network
LI Mingliang, DING Hongwei, LI Bo, WANG Liqing, BAO Liyong
Journal of Computer Applications    2020, 40 (7): 2038-2045.   DOI: 10.11772/j.issn.1001-9081.2019112028
Random multiple access communication is an indispensable part of computer communication research. A three-time-slot P-Persistent Carrier Sense Multiple Access (P-CSMA) protocol with variable collision duration in Wireless Sensor Network (WSN) was proposed to address the shortcomings of the traditional P-CSMA protocol in WSN transmission control and system energy consumption. In this protocol, a separate collision duration was added to the traditional two-time-slot P-CSMA protocol, turning the system model into a three-time-slot model consisting of the duration of a successful packet transmission, the duration of a packet collision and the idle duration of the system. Through modeling, the throughput, collision rate and idle rate of the system under this model were analyzed, and it was found that adjusting the collision duration reduces the loss of the system. Compared with the traditional P-CSMA protocol, the proposed protocol improves system performance and clearly extends the lifetime of the system nodes obtained from the battery model. Based on the analysis, a system simulation flowchart of the protocol was derived. Finally, the theoretical and simulation values of the different indexes were compared, verifying the correctness of the theoretical derivation.
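As a rough illustration of the three-time-slot analysis, the following Python sketch estimates throughput, collision rate and idle rate of a saturated, slotted p-persistent channel by Monte Carlo simulation; the node count, persistence probability p and the success/collision slot lengths are assumed placeholders, not the paper's values.

```python
import random

def simulate_p_csma(n_nodes=10, p=0.05, t_success=5, t_collision=2,
                    total_slots=100_000, seed=0):
    """Monte Carlo estimate of throughput, collision rate and idle rate for a
    saturated, slotted p-persistent CSMA channel with three slot types
    (successful transmission, collision, idle)."""
    rng = random.Random(seed)
    busy_success = busy_collision = idle = 0
    t = 0
    while t < total_slots:
        senders = sum(1 for _ in range(n_nodes) if rng.random() < p)
        if senders == 0:          # nobody transmits: one idle slot
            idle += 1
            t += 1
        elif senders == 1:        # exactly one sender: successful packet
            busy_success += t_success
            t += t_success
        else:                     # two or more senders: shortened collision slot
            busy_collision += t_collision
            t += t_collision
    total = busy_success + busy_collision + idle
    return {"throughput": busy_success / total,
            "collision_rate": busy_collision / total,
            "idle_rate": idle / total}

if __name__ == "__main__":
    print(simulate_p_csma())
```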
Prediction of protein subcellular localization based on deep learning
WANG Yihao, DING Hongwei, LI Bo, BAO Liyong, ZHANG Yingjie
Journal of Computer Applications    2020, 40 (11): 3393-3399.   DOI: 10.11772/j.issn.1001-9081.2020040510
Focused on the issue that traditional machine learning algorithms still need features to be represented manually, a protein subcellular localization algorithm based on the deep network of Stacked Denoising AutoEncoder (SDAE) was proposed. Firstly, the improved Pseudo-Amino Acid Composition (PseAAC), Pseudo Position Specific Scoring Matrix (PsePSSM) and Conjoint Triad (CT) were used respectively to extract features of the protein sequence, and the feature vectors obtained by these three methods were fused to obtain a new feature representation of the protein sequence. Secondly, the fused feature vector was fed into the SDAE deep network to automatically learn a more effective feature representation. Thirdly, a Softmax regression classifier was adopted to classify and predict the subcellular locations, with leave-one-out cross validation performed on the Viral proteins and Plant proteins datasets. Finally, the results of the proposed algorithm were compared with those of existing algorithms such as mGOASVM (multi-label protein subcellular localization based on Gene Ontology and Support Vector Machine) and HybridGO-Loc (mining Hybrid features on Gene Ontology for predicting subcellular Localization of multi-location proteins). Experimental results show that the new algorithm achieves 98.24% accuracy on the Viral proteins dataset, which is 9.35 percentage points higher than that of the mGOASVM algorithm, and 97.63% accuracy on the Plant proteins dataset, which is 10.21 and 4.07 percentage points higher than those of the mGOASVM and HybridGO-Loc algorithms respectively. In summary, the proposed algorithm can effectively improve the accuracy of protein subcellular localization prediction.
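For readers unfamiliar with the SDAE component, the sketch below shows one denoising-autoencoder layer and its greedy layer-wise pre-training in PyTorch; the layer sizes, Gaussian corruption and training schedule are illustrative assumptions, not the paper's configuration.

```python
import torch
from torch import nn

class DenoisingAutoEncoder(nn.Module):
    """One layer of a stacked denoising autoencoder: corrupt the input with
    Gaussian noise, encode, and reconstruct the clean input."""
    def __init__(self, in_dim, hidden_dim, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.decoder = nn.Linear(hidden_dim, in_dim)

    def forward(self, x):
        noisy = x + self.noise_std * torch.randn_like(x)
        code = self.encoder(noisy)
        return self.decoder(code), code

def pretrain_layer(dae, data, epochs=50, lr=1e-3):
    """Greedy pre-training of one layer on the fused feature vectors."""
    opt = torch.optim.Adam(dae.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        recon, _ = dae(data)
        loss = loss_fn(recon, data)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return dae

# Example: fused PseAAC + PsePSSM + CT features (dimensions are placeholders).
features = torch.randn(200, 500)          # 200 proteins, 500-dim fused vector
layer1 = pretrain_layer(DenoisingAutoEncoder(500, 128), features)
codes = layer1.encoder(features)          # input to the next DAE / Softmax layer
```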
Application of KNN algorithm based on value difference metric and clustering optimization in bank customer behavior prediction
LI Bo, ZHANG Xiao, YAN Jingyi, LI Kewei, LI Heng, LING Yulong, ZHANG Yong
Journal of Computer Applications    2019, 39 (9): 2784-2788.   DOI: 10.11772/j.issn.1001-9081.2019030571

In order to improve the accuracy of behavior prediction for bank loan customers, and to address the traditional K-Nearest Neighbors (KNN) algorithm's incomplete handling of non-numerical factors in data analysis, an improved KNN algorithm based on Value Difference Metric (VDM) distance and iterative optimization of clustering results was proposed. Firstly, the collected data were clustered by the KNN algorithm based on VDM distance; then the clustering results were analyzed iteratively; finally the prediction accuracy was improved through joint training. Experiments on the customer data collected by a Portuguese retail bank from 2008 to 2013 show that, compared with the traditional KNN, FCD-KNN (Feature Correlation Difference KNN), Gaussian Naive Bayes and Gradient Boosting algorithms, the improved KNN algorithm has better performance and stability, and therefore has great application value in customer behavior prediction from bank data.
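The Value Difference Metric at the core of the improved KNN can be sketched as follows; the toy categorical data and the exponent q = 2 are assumptions for illustration.

```python
import numpy as np
from collections import Counter, defaultdict

def vdm_tables(X_cat, y):
    """Per-attribute conditional class frequencies P(class | attribute value),
    used by the Value Difference Metric."""
    classes = sorted(set(y))
    tables = []
    for j in range(X_cat.shape[1]):
        probs = defaultdict(lambda: np.zeros(len(classes)))
        counts = Counter(X_cat[:, j])
        for xi, yi in zip(X_cat[:, j], y):
            probs[xi][classes.index(yi)] += 1
        for v in probs:
            probs[v] /= counts[v]
        tables.append(probs)
    return tables

def vdm_distance(a, b, tables, q=2):
    """VDM distance between two categorical feature vectors."""
    return sum(np.sum(np.abs(table[a[j]] - table[b[j]]) ** q)
               for j, table in enumerate(tables))

def knn_predict(x, X_train, y_train, tables, k=5):
    dists = [vdm_distance(x, row, tables) for row in X_train]
    nearest = np.argsort(dists)[:k]
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]

X = np.array([["married", "manager"], ["single", "technician"],
              ["married", "technician"], ["single", "manager"]])
y = np.array(["yes", "no", "no", "yes"])
tables = vdm_tables(X, y)
print(knn_predict(np.array(["married", "technician"]), X, y, tables, k=3))
```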

Network abnormal behavior detection model based on adversarially learned inference
YANG Hongyu, LI Bochao
Journal of Computer Applications    2019, 39 (7): 1967-1972.   DOI: 10.11772/j.issn.1001-9081.2018112302

In order to solve the problem of low recall rate caused by data imbalance in network abnormal behavior detection, a network abnormal behavior detection model based on Adversarially Learned Inference (ALI) was proposed. Firstly, the feature items represented by discrete data in the dataset were removed, and the processed dataset was normalized to improve the convergence speed and accuracy of the model. Then, an improved ALI model was trained with the ALI training algorithm on a dataset consisting only of positive samples, and the trained model was applied to the detection data to generate a processed detection dataset. Finally, the distance between the detection data and the processed detection data was calculated by an abnormality detection function to determine whether the data was abnormal. The experimental results show that, compared with One-Class Support Vector Machine (OC-SVM), Deep Structured Energy Based Model (DSEBM), Deep Autoencoding Gaussian Mixture Model (DAGMM) and the Generative Adversarial Network based anomaly detection model AnoGAN, the accuracy of the proposed model is improved by 5.8-17.4 percentage points, the recall rate is increased by 1.4-31.4 percentage points, and the F1 value is increased by 14.18-19.7 percentage points. It can be seen that the ALI based network abnormal behavior detection model achieves high recall rate and detection accuracy when the data is imbalanced.

Person re-identification based on Siamese network and bidirectional max margin ranking loss
QI Ziliang, QU Hanbing, ZHAO Chuanhu, DONG Liang, LI Bozhao, WANG Changsheng
Journal of Computer Applications    2019, 39 (4): 977-983.   DOI: 10.11772/j.issn.1001-9081.2018091889
Focusing on the low accuracy of person re-identification caused by the fact that, in reality, the similarity between images of different pedestrians can exceed the similarity between images of the same pedestrian, a person re-identification method based on a Siamese network combining identification loss and a bidirectional max margin ranking loss was proposed. Firstly, a neural network model pre-trained on a large dataset was adapted, with its final fully-connected layer structurally modified so that it outputs correct results on the person re-identification dataset. Secondly, training on the training set was supervised by the combination of identification loss and ranking loss: by requiring the similarity gap between positive and negative sample pairs to exceed a predetermined margin, the distance of a negative pair was made larger than that of a positive pair. Finally, the trained model was used on the test set to extract features and compare their cosine similarities. Experimental results on the open datasets Market-1501, CUHK03 and DukeMTMC-reID show that the rank-1 recognition rates of the proposed method reach 89.4%, 86.7% and 77.2% respectively, which are higher than those of other classical methods; moreover, the proposed method achieves a rank-1 improvement of up to 10.04% over the baseline network structure.
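One plausible reading of the combined identification and bidirectional max margin ranking loss is sketched below in PyTorch; the exact bidirectional formulation, margin and loss weighting used in the paper may differ.

```python
import torch
import torch.nn.functional as F

def bidirectional_max_margin_ranking_loss(anchor, positive, negative, margin=0.3):
    """Sketch of a bidirectional max-margin ranking loss: the positive-pair
    distance should be smaller than the negative-pair distance by at least
    `margin`, enforced from both the anchor and the positive side."""
    d_ap = F.pairwise_distance(anchor, positive)      # same identity
    d_an = F.pairwise_distance(anchor, negative)      # anchor vs. other identity
    d_pn = F.pairwise_distance(positive, negative)    # positive vs. other identity
    forward = F.relu(margin + d_ap - d_an)            # anchor-side constraint
    backward = F.relu(margin + d_ap - d_pn)           # positive-side constraint
    return (forward + backward).mean()

def total_loss(logits, labels, anchor, positive, negative, w=1.0):
    """Identification (cross-entropy) loss combined with the ranking loss."""
    return F.cross_entropy(logits, labels) + \
           w * bidirectional_max_margin_ranking_loss(anchor, positive, negative)
```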
New 3D scene modeling language and environment based on BNF paradigm
XU Xiaodan, LI Bingjie, LI Bosen, LYU Shun
Journal of Computer Applications    2018, 38 (9): 2666-2672.   DOI: 10.11772/j.issn.1001-9081.2018030552
To address the high degree of business coupling and the insufficient ability to describe object attributes and the characteristics of complex scenes in existing Three-Dimensional (3D) scene modeling models, a new scene modeling language and environment based on BNF (Backus-Naur Form) was proposed to solve the modeling problem of 3D virtual sacrifice scenes. Firstly, the concepts of scene object, scene object template and scene object template attribute were introduced to analyze in detail the compositional features of a 3D virtual sacrifice scene. Secondly, a 3D scene modeling language with loose coupling, strong attribute description capability and flexible generality was proposed. Then, the operations of the scene modeling language were designed, so that the language could be edited through Application Programming Interface (API) calls and supported interface-based modeling. Finally, a set of Extensible Markup Language (XML) mapping methods was defined for the language, so that scene modeling results could be stored in XML text format, improving the reusability of modeling results; the application of the modeling approach was then demonstrated. The application results show that the method enhances the support for new data type features, improves the description of sequence attributes and structured attribute types, and improves the description capability, versatility and flexibility for complex scenes. The proposed method outperforms the method of SHU et al. (SHU B, QIU X J, WANG Z Q. Survey of shape from image. Journal of Computer Research and Development, 2010, 47(3): 549-560) and solves the 3D virtual sacrifice scene modeling problem. It is also suitable for modeling 3D scenes with low granularity, many attribute components and high coupling degree, and can improve modeling efficiency.
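As an illustration of the XML mapping step, the sketch below serializes a toy scene-object/template/attribute description with Python's standard library; the element names, attribute names and scene content are hypothetical, not the grammar defined in the paper.

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified scene description for illustration only.
scene = {
    "name": "memorial_hall",
    "objects": [
        {"template": "candle", "id": "candle_01",
         "attributes": {"position": "1.0 0.0 2.5", "lit": "true"}},
        {"template": "flower", "id": "flower_07",
         "attributes": {"position": "0.5 0.0 2.0", "color": "white"}},
    ],
}

def scene_to_xml(scene):
    """Map a scene-object/template/attribute description to XML so that
    modeling results can be stored and reused as plain text."""
    root = ET.Element("Scene", name=scene["name"])
    for obj in scene["objects"]:
        node = ET.SubElement(root, "SceneObject",
                             template=obj["template"], id=obj["id"])
        for key, value in obj["attributes"].items():
            ET.SubElement(node, "Attribute", name=key, value=value)
    return ET.tostring(root, encoding="unicode")

print(scene_to_xml(scene))
```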
Image deep convolution classification method based on complex network description
HONG Rui, KANG Xiaodong, GUO Jun, LI Bo, WANG Yage, ZHANG Xiufang
Journal of Computer Applications    2018, 38 (12): 3399-3402.   DOI: 10.11772/j.issn.1001-9081.2018051041
In order to improve the accuracy of image classification with a convolution network model without adding much computation, a new image deep convolution classification method based on complex network description was proposed. Firstly, complex network degree matrices under different thresholds were obtained by describing the image as a complex network. Then, a feature vector was obtained by applying a deep convolution neural network to the degree-matrix description of the image. Finally, the obtained feature vectors were used for K-Nearest Neighbors (KNN) image classification. Verification experiments were carried out on the ImageNet Large Scale Visual Recognition Challenge 2014 (ILSVRC2014) database. The experimental results show that the proposed model achieves higher accuracy with fewer iterations.
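A minimal sketch of the degree-matrix description is given below: each pixel is treated as a node and connected to its neighbours when their intensity difference stays under a threshold; the thresholds and neighbourhood radius are assumptions, not the paper's settings.

```python
import numpy as np

def degree_matrices(gray, thresholds=(8, 16, 32), radius=1):
    """Describe a grayscale image as a complex network: pixels are nodes, and
    two pixels within `radius` are connected when their intensity difference
    does not exceed a threshold. The per-pixel node degree forms one
    'degree matrix' per threshold."""
    h, w = gray.shape
    offsets = [(dy, dx) for dy in range(-radius, radius + 1)
                         for dx in range(-radius, radius + 1)
                         if (dy, dx) != (0, 0)]
    matrices = []
    for t in thresholds:
        degree = np.zeros((h, w), dtype=np.int32)
        for dy, dx in offsets:
            shifted = np.roll(np.roll(gray, dy, axis=0), dx, axis=1)
            # np.roll wraps around the border; a real implementation would
            # mask out the wrapped edges.
            degree += (np.abs(gray.astype(int) - shifted.astype(int)) <= t)
        matrices.append(degree)
    return np.stack(matrices)          # shape: (len(thresholds), h, w)

example = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(degree_matrices(example).shape)  # (3, 64, 64), fed to the CNN as channels
```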
Probabilistic soft logic reasoning model with semi-automatic rule learning
ZHANG Jia, ZHANG Hui, ZHAO Xujian, YANG Chunming, LI Bo
Journal of Computer Applications    2018, 38 (11): 3144-3149.   DOI: 10.11772/j.issn.1001-9081.2018041308
Probabilistic Soft Logic (PSL), a declarative rule-based probabilistic model, has strong extensibility and adaptability to multiple domains. However, it requires a lot of common sense and domain knowledge as preconditions for establishing rules; acquiring such knowledge is often very expensive, and incorrect information contained in it may reduce the correctness of reasoning. To alleviate this problem, the C5.0 algorithm was combined with probabilistic soft logic so that data and knowledge jointly drive the reasoning model, and a semi-automatic rule learning method was proposed: the C5.0 algorithm was used to extract rules, which were supplemented with manually defined rules and optimized, adjusted rules as the input of the improved probabilistic soft logic. The experimental results show that, on student performance prediction, the proposed method has higher accuracy than the C5.0 algorithm alone and than PSL without rule learning. Compared with earlier approaches relying on purely hand-defined rules, the proposed method significantly reduces manual cost, and it also shows good results compared with Bayesian Network (BN), Support Vector Machine (SVM) and other algorithms.
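The rule-extraction step can be illustrated with a decision tree; the sketch below uses scikit-learn's CART as a stand-in for C5.0 and prints one IF-THEN rule per leaf, which could then be reviewed and rewritten as PSL rules.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

def extract_rules(tree, feature_names, class_names):
    """Walk a fitted decision tree and emit one IF-THEN rule per leaf."""
    t = tree.tree_
    rules = []

    def recurse(node, conditions):
        if t.children_left[node] == -1:            # leaf node
            label = class_names[t.value[node][0].argmax()]
            rules.append("IF " + " AND ".join(conditions) +
                         f" THEN class = {label}")
            return
        name, thr = feature_names[t.feature[node]], t.threshold[node]
        recurse(t.children_left[node], conditions + [f"{name} <= {thr:.2f}"])
        recurse(t.children_right[node], conditions + [f"{name} > {thr:.2f}"])

    recurse(0, [])
    return rules

data = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)
for rule in extract_rules(clf, data.feature_names, data.target_names):
    print(rule)
```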
Task scheduling algorithm for cloud computing based on multi-scale quantum harmonic oscillator algorithm
HAN Hu, WANG Peng, CHENG Kun, LI Bo
Journal of Computer Applications    2017, 37 (7): 1888-1892.   DOI: 10.11772/j.issn.1001-9081.2017.07.1888
Reasonable virtual machine allocation and efficient task scheduling are key problems in cloud computing. In order to make better use of virtual machines and let the system handle service requests efficiently, a task scheduling algorithm based on the Multi-scale Quantum Harmonic Oscillator Algorithm (MQHOA) was proposed. Firstly, each scheduling scheme was regarded as a sampling position, and the randomness of Gaussian sampling was used to search for the local optimal solution at the current scale. Then, whether the energy level was stable was judged; if so, the algorithm entered the descent process and the worst scheduling scheme was replaced. Finally, when the algorithm entered the scale-reduction process, it transitioned from global search to local search, and after several iterations it terminated and delivered the optimal result. Simulation results on the CloudSim platform show that, compared with the First Come First Served (FCFS) algorithm and the Particle Swarm Optimization (PSO) algorithm, the makespan of task scheduling with MQHOA decreases by more than 10% and the degree of imbalance falls by more than 0.4. The experimental results show that the proposed algorithm converges quickly and has good global convergence and adaptability; it can reduce the makespan of task scheduling and maintain the load balance of virtual machines in cloud computing.
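A loose sketch of the scheduling objective and the Gaussian-sampling search at a single scale is shown below; the multi-scale schedule, energy-level test and worst-scheme replacement of MQHOA are omitted, and the task lengths and VM speeds are made-up values.

```python
import numpy as np

def makespan(assignment, task_len, vm_speed):
    """Finish time of the busiest VM for a task-to-VM assignment."""
    load = np.zeros(len(vm_speed))
    for task, vm in enumerate(assignment):
        load[vm] += task_len[task] / vm_speed[vm]
    return load.max()

def gaussian_local_search(task_len, vm_speed, iters=2000, sigma=2.0, seed=0):
    """Gaussian-sampling search over scheduling schemes, loosely in the spirit
    of one MQHOA scale."""
    rng = np.random.default_rng(seed)
    n_tasks, n_vm = len(task_len), len(vm_speed)
    best = rng.integers(0, n_vm, size=n_tasks)        # random initial scheme
    best_cost = makespan(best, task_len, vm_speed)
    for _ in range(iters):
        cand = (best + rng.normal(0, sigma, n_tasks).round().astype(int)) % n_vm
        cost = makespan(cand, task_len, vm_speed)
        if cost < best_cost:                          # keep the better scheme
            best, best_cost = cand, cost
    return best, best_cost

tasks = np.random.default_rng(1).integers(100, 1000, size=40)   # task lengths (MI)
vms = np.array([500, 1000, 1500, 2000])                          # VM speeds (MIPS)
print(gaussian_local_search(tasks, vms)[1])
```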
Real-time vehicle monitoring algorithm for single-lane based on DSP
YANG Ting, LI Bo, SHI Wenjing, ZHANG Chengfei
Journal of Computer Applications    2017, 37 (2): 593-596.   DOI: 10.11772/j.issn.1001-9081.2017.02.0593
Traditional traffic flow detection systems based on sensor devices require complex hardware, and general traffic flow detection algorithms cannot distinguish the direction of vehicles. To resolve these problems, a real-time single-lane vehicle monitoring algorithm based on a Digital Signal Processor (DSP) was proposed and applied to a parking lot. Firstly, a background differencing algorithm with an improved mean background modeling method was used to detect vehicles in a virtual detection zone. Then, an adjacent-frame two-value classification algorithm was proposed to distinguish the direction of vehicles. Finally, the virtual detection zone was used for vehicle counting, and the number of empty parking spots was displayed in real time on a Light Emitting Diode (LED) screen. The feasibility of the proposed algorithm was verified by simulation experiments. The actual test results show that the accuracy of the adjacent-frame two-value classification algorithm for direction detection is 96.5% and the accuracy of the parking spot monitoring algorithm is 92.2%. The proposed algorithm achieves high accuracy with little detection equipment, so it can be applied to real-time vehicle monitoring in single-lane parking lots.
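The background-differencing idea over a virtual detection zone can be sketched as follows; the zone size, thresholds and update rate are assumptions, and the direction-classification and DSP-specific parts are not reproduced.

```python
import numpy as np

class BackgroundDifferencer:
    """Running-mean background model over a virtual detection zone: a vehicle
    is reported when enough pixels differ from the background."""
    def __init__(self, zone_shape, alpha=0.02, diff_thresh=30, ratio_thresh=0.2):
        self.bg = np.zeros(zone_shape, dtype=np.float32)
        self.alpha = alpha                  # background update rate
        self.diff_thresh = diff_thresh      # per-pixel intensity threshold
        self.ratio_thresh = ratio_thresh    # fraction of changed pixels

    def update(self, zone):
        zone = zone.astype(np.float32)
        mask = np.abs(zone - self.bg) > self.diff_thresh
        occupied = mask.mean() > self.ratio_thresh
        if not occupied:                    # only learn the background when empty
            self.bg = (1 - self.alpha) * self.bg + self.alpha * zone
        return occupied

# Usage with synthetic frames of a 64x128-pixel detection zone (assumed size):
det = BackgroundDifferencer((64, 128))
frame = (np.random.rand(64, 128) * 10).astype(np.uint8)       # empty road
print(det.update(frame))                                       # False
print(det.update(frame + 80))                                  # True: large change
```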
Improved particle swarm optimization algorithm for support vector machine feature selection and optimization of parameters
ZHANG Jin, DING Sheng, LI Bo
Journal of Computer Applications    2016, 36 (5): 1330-1335.   DOI: 10.11772/j.issn.1001-9081.2016.05.1330
Since feature selection and parameter optimization in Support Vector Machine (SVM) have a great impact on the classification accuracy, an improved Particle Swarm Optimization (PSO) based algorithm for SVM feature selection and parameter optimization (GPSO-SVM) was proposed to improve the classification accuracy while selecting as few features as possible. To address the tendency of the traditional particle swarm algorithm to fall into local optima and converge prematurely, crossover and mutation operators were introduced from the Genetic Algorithm (GA), allowing particles to perform crossover and mutation operations after each iteration and update. Crossover pairing between particles was determined by a non-correlation index between particles, and the mutation probability was determined by the fitness value, so that new particles were generated into the swarm. In this way, particles jump out of previously searched optimal positions, which improves the diversity of the population and helps find better values. Experiments on different datasets show that, compared with feature selection and SVM parameter optimization algorithms based on PSO and GA, the accuracy of GPSO-SVM is improved by 2% to 3% on average, and the number of selected features is reduced by 3% to 15%. The experimental results show that the proposed algorithm achieves better feature selection and parameter optimization.
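The sketch below shows the underlying PSO search over a joint feature-mask plus (C, gamma) encoding with a cross-validated SVM as fitness; the GA-style crossover and mutation steps that define GPSO-SVM are omitted, and the dataset and swarm settings are assumptions.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def fitness(particle, X, y):
    """Decode a particle into a feature mask plus (C, gamma) and score it with
    cross-validated SVM accuracy."""
    n_feat = X.shape[1]
    mask = particle[:n_feat] > 0.5
    if not mask.any():
        return 0.0
    C, gamma = 10 ** particle[n_feat], 10 ** particle[n_feat + 1]
    return cross_val_score(SVC(C=C, gamma=gamma), X[:, mask], y, cv=3).mean()

def pso_svm(X, y, n_particles=8, iters=10, seed=0):
    """Plain PSO over [feature mask | log10 C | log10 gamma]."""
    rng = np.random.default_rng(seed)
    dim = X.shape[1] + 2
    pos = rng.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_fit = pos.copy(), np.array([fitness(p, X, y) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        fit = np.array([fitness(p, X, y) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest, pbest_fit.max()

X, y = load_breast_cancer(return_X_y=True)
print(pso_svm(X, y)[1])
```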
Particle swarm optimization algorithm based on multi-strategy synergy
LI Jun, WANG Chong, LI Bo, FANG Guokang
Journal of Computer Applications    2016, 36 (3): 681-686.   DOI: 10.11772/j.issn.1001-9081.2016.03.681
Aiming at the shortcomings that the Particle Swarm Optimization (PSO) algorithm easily falls into local optima and has low precision in the later stage of evolution, a modified Multi-Strategy synergy PSO (MSPSO) algorithm was proposed. Firstly, a probability threshold of 0.3 was set: in every iteration, if the randomly generated probability value was below the threshold, opposition-based learning was applied to the best individual to generate its opposite solution, which improved the convergence speed and precision of PSO; otherwise, a Gaussian mutation strategy was applied to the particle positions to enhance the diversity of the population. Secondly, a Cauchy mutation strategy with a linearly decreasing scale parameter of the Cauchy distribution was proposed to generate better solutions that guide the particles toward the optimal region. Finally, simulation experiments were conducted on eight benchmark functions. The MSPSO algorithm reaches convergence mean values of 1.68E+01, 2.36E-283, 8.88E-16, 2.78E-05 and 8.88E-16 on Rosenbrock, Schwefel's P2.22, Rotated Ackley, Quadric Noise and Ackley respectively, and converges to the optimal solution 0 on Sphere, Griewank and Rastrigin, outperforming GDPSO (PSO based on Gaussian Disturbance) and GOPSO (PSO based on global best Cauchy mutation and Opposition-based learning). The results show that the proposed algorithm has higher convergence accuracy and can effectively avoid being trapped in local optima.
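The three strategies can be sketched as standalone operators; the bounds, scale parameters and the way the 0.3 threshold is applied below are a simplified reading of the abstract, not the paper's exact update equations.

```python
import numpy as np

rng = np.random.default_rng(0)

def opposition(x, lower, upper):
    """Opposition-based learning: reflect a solution inside its bounds."""
    return lower + upper - x

def gaussian_mutation(x, sigma=0.1):
    """Gaussian disturbance of a particle position."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def cauchy_mutation(x, t, t_max, scale_start=1.0, scale_end=0.1):
    """Cauchy mutation whose scale parameter decreases linearly with the
    iteration counter t, narrowing the jumps as the search proceeds."""
    scale = scale_start - (scale_start - scale_end) * t / t_max
    return x + scale * rng.standard_cauchy(size=x.shape)

def mutate(best, particle, lower, upper, p_threshold=0.3):
    """Per-iteration strategy choice (threshold 0.3 taken from the abstract)."""
    if rng.random() < p_threshold:
        return opposition(best, lower, upper)     # exploit around the best individual
    return gaussian_mutation(particle)            # otherwise diversify the swarm

x = rng.uniform(-5, 5, size=10)
print(mutate(x, x, -5.0, 5.0))
print(cauchy_mutation(x, t=10, t_max=100))
```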
User sentiment model oriented to product attribute
JIA Wenjun, ZHANG Hui, YANG Chunming, ZHAO Xujian, LI Bo
Journal of Computer Applications    2016, 36 (1): 175-180.   DOI: 10.11772/j.issn.1001-9081.2016.01.0175
Traditional sentiment models face two main problems when analyzing users' emotions in product reviews: 1) the lack of fine-grained sentiment analysis of product attributes; 2) the number of product attributes must be defined in advance. To alleviate these problems, a fine-grained model of product attributes named User Sentiment Model (USM) was proposed. Firstly, the entities of product attributes were clustered by Hierarchical Dirichlet Processes (HDP), so that the number of product attributes could be obtained automatically. Then, the combination of entity weights of product attributes, evaluation phrases of product attributes and a sentiment lexicon was used as the prior. Finally, Latent Dirichlet Allocation (LDA) was used to classify the sentiment of product attributes. The experimental results show that the model achieves high accuracy in sentiment classification, with an average accuracy of 87%. Compared with traditional sentiment models, the proposed model obtains higher accuracy in both extracting product attributes and classifying the sentiment of evaluation phrases.
Personalized book recommendation algorithm based on topic model
ZHENG Xiangyun, CHEN Zhigang, HUANG Rui, LI Bo
Journal of Computer Applications    2015, 35 (9): 2569-2573.   DOI: 10.11772/j.issn.1001-9081.2015.09.2569
Concerning the high time complexity of traditional recommendation algorithms, a new recommendation model based on the Latent Dirichlet Allocation (LDA) model was proposed. It is a data mining model for Book Recommendation (BR) in library management systems, named the Book Recommendation_Latent Dirichlet Allocation (BR_LDA) model. Through content similarity analysis between the target borrower's historical borrowing data and other books, books with high content similarity to the borrower's previously borrowed books were obtained. Through similarity analysis between the target borrower's historical borrowing data and that of other borrowers, the historical borrowing data of the nearest neighbors were obtained. The books the target borrower is interested in were finally obtained by calculating the probabilities of the candidate books. In particular, when the number of recommended books is 4000, the precision of the BR_LDA model is 6.2% higher than that of the multi-feature method and 4.5% higher than that of the association rule method; when the recommendation list has 500 items, the precision of the BR_LDA model is 2.1% higher than that of nearest-neighbor based collaborative filtering and 0.5% higher than that of matrix factorization based collaborative filtering. The experimental results show that the model can efficiently mine book data and reasonably recommend to target borrowers new books in the categories they have historically been interested in, as well as new books in potentially interesting categories.
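A minimal LDA-based content-similarity sketch is given below, using scikit-learn's LDA on toy book texts; the corpus, topic count and borrower-profile construction are assumptions, intended only to illustrate how borrowed-book topics can rank candidate books.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy book descriptions; in the paper the corpus is the library's book data.
books = [
    "machine learning and pattern recognition algorithms",
    "deep learning with neural networks",
    "classical chinese poetry anthology",
    "history of the tang dynasty",
]
borrowed_idx = [0]          # books the target borrower has already borrowed

vec = CountVectorizer()
counts = vec.fit_transform(books)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topics = lda.fit_transform(counts)              # book-topic distributions

# Borrower profile = mean topic distribution of the borrowed books;
# recommend the unborrowed books most similar to that profile.
profile = topics[borrowed_idx].mean(axis=0, keepdims=True)
scores = cosine_similarity(profile, topics).ravel()
ranking = [i for i in np.argsort(-scores) if i not in borrowed_idx]
print([books[i] for i in ranking])
```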
Multi-Agent path planning algorithm based on hierarchical reinforcement learning and artificial potential field
ZHENG Yanbin, LI Bo, AN Deyu, LI Na
Journal of Computer Applications    2015, 35 (12): 3491-3496.   DOI: 10.11772/j.issn.1001-9081.2015.12.3491
Aiming at the slow convergence and low efficiency of existing path planning algorithms, a multi-Agent path planning algorithm based on hierarchical reinforcement learning and the artificial potential field was proposed. Firstly, the multi-Agent operating environment was regarded as an artificial potential field, and the potential energy of every point, which represents the maximal reward obtainable under the optimal strategy, was determined from prior knowledge. Then, learning without an environment model and the partial updates of hierarchical reinforcement learning were used to restrict the strategy update process to a smaller local space or a lower-dimensional high-level space, enhancing the performance of the learning algorithm. Finally, the proposed algorithm was evaluated on the taxi problem in a grid environment; to approximate a real environment and increase the portability of the algorithm, it was also verified in a three-dimensional simulation environment. The experimental results show that the algorithm converges quickly and stably.
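The artificial-potential-field idea, assigning each point the maximal discounted reward reachable under the optimal strategy, can be sketched with value iteration on a small grid; the grid, reward and discount factor below are assumptions.

```python
import numpy as np

def potential_field(grid, goal, gamma=0.95):
    """Compute each cell's potential as the maximal discounted reward reachable
    under the optimal policy (value iteration on a grid world); the potentials
    can then shape the rewards of the low-level learner."""
    h, w = grid.shape                      # grid: 0 = free cell, 1 = obstacle
    value = np.zeros((h, w))
    for _ in range(200):                   # run value iteration to convergence
        new = np.zeros_like(value)
        for y in range(h):
            for x in range(w):
                if grid[y, x] == 1:
                    continue
                if (y, x) == goal:
                    new[y, x] = 1.0        # unit reward at the goal
                    continue
                neighbors = [(y + dy, x + dx) for dy, dx in
                             ((1, 0), (-1, 0), (0, 1), (0, -1))
                             if 0 <= y + dy < h and 0 <= x + dx < w
                             and grid[y + dy, x + dx] == 0]
                if neighbors:
                    new[y, x] = gamma * max(value[n] for n in neighbors)
        value = new
    return value

grid = np.zeros((5, 5), dtype=int)
grid[2, 1:4] = 1                           # a wall
print(np.round(potential_field(grid, goal=(4, 4)), 2))
```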
Key salient object detection based on filtering integration method
WANG Chen, FAN Yangyu, LI Bo, XIONG Lei
Journal of Computer Applications    2014, 34 (12): 3531-3535.  

Concerning background interference in salient object detection, a key salient object detection algorithm based on filtering integration was proposed. The algorithm integrates local guided filtering with an improved DoG (Difference of Gaussian) filter to make the salient object more prominent. Then, a set of key points is determined from the saliency map, and the saliency detection result is refined by an adjustment factor, making it more consistent with the human visual system. The experimental results show that the proposed algorithm outperforms existing saliency detection methods such as Local Contrast (LC), Spectral Residual (SR), Histogram-based Contrast (HC), Region Contrast (RC) and Frequency-Tuned (FT): it suppresses background interference effectively and achieves higher precision and better recall.
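A crude DoG-based saliency map and key-point selection are sketched below; the sigmas and quantile are assumptions, and the improved DoG and guided-filtering integration of the paper are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_saliency(gray, sigma_small=2.0, sigma_large=8.0):
    """Difference-of-Gaussian band-pass response as a crude saliency map."""
    gray = gray.astype(np.float64)
    dog = gaussian_filter(gray, sigma_small) - gaussian_filter(gray, sigma_large)
    saliency = np.abs(dog)
    return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-12)

def key_points(saliency, quantile=0.95):
    """Key-point set: pixels whose saliency exceeds a high quantile."""
    return np.argwhere(saliency >= np.quantile(saliency, quantile))

img = np.zeros((128, 128))
img[48:80, 48:80] = 1.0                     # bright square on dark background
sal = dog_saliency(img)
print(len(key_points(sal)), "key points")
```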

Validation method of security features in safety critical software requirements specification
WANG Fei, GUO Yuanbo, LI Bo, HAO Yaohui
Journal of Computer Applications    2013, 33 (07): 2041-2045.   DOI: 10.11772/j.issn.1001-9081.2013.07.2041
Since the security features described in natural language in safety-critical software requirements specifications are inaccurate and inconsistent, a validation method of security features based on UMLsec was proposed. The method completes the UMLsec model by customizing stereotypes, tags and constraints for the security features of the core classes on the basis of the class diagram and sequence diagram of the UML requirements model. Then, the designed and implemented UMLsec support tool was used for automatic verification of the security features. The experimental results show that the proposed method can accurately describe the security features in safety-critical requirements specifications and automatically verify whether they meet the security requirements.
Optimum packing of rectangles based on heuristic dynamic decomposition algorithm
LI Bo, WANG Shi, SHI Songxin, HU Junyong
Journal of Computer Applications    2013, 33 (07): 1908-1911.   DOI: 10.11772/j.issn.1001-9081.2013.07.1908
To solve the optimal packing problem of two-dimensional rectangle layout, a heuristic dynamic decomposition algorithm was proposed, which can also be applied to three-dimensional rectangular layout and global optimization problems. The container is orthogonally decomposed according to the rectangles being placed, the best sub-container is selected according to the degree of placement coupling, and the state of all containers is updated according to the interference relationships, so that large-scale, complex problems can be solved quickly and efficiently. Experimental results on internationally recognized benchmark cases show that the proposed algorithm outperforms similar algorithms, with layout utilization increased by 9.4% and calculation efficiency improved by up to 95.7%. The algorithm has been applied in the commercial packing software AutoCUT and has good application prospects.
Medical image classification based on scale space multi-feature fusion
LI Bo, CAO Peng, LI Wei, ZHAO Dazhe
Journal of Computer Applications    2013, 33 (04): 1108-1111.   DOI: 10.3724/SP.J.1087.2013.01108
In order to describe different kinds of medical images more consistently and reduce sensitivity to scale, a classification model based on scale space multi-feature fusion was proposed according to the characteristics of medical images. First, a scale space was built using difference of Gaussians, and then complementary features were extracted, such as gray-scale features, texture features, shape features and features extracted in the frequency domain. In addition, maximum likelihood estimation was used to realize decision-level fusion. The scale space multi-feature fusion classification model was applied to a medical image classification task following the IRMA code. The experimental results show that, compared with traditional methods, the F1 value increases by 5%-20%. The fusion classification model describes medical images more comprehensively, avoids the information loss caused by feature dimension reduction, improves classification accuracy, and has clinical value.
Imbalanced data learning based on particle swarm optimization
CAO Peng, LI Bo, LI Wei, ZHAO Dazhe
Journal of Computer Applications    2013, 33 (03): 789-792.   DOI: 10.3724/SP.J.1087.2013.00789
In order to improve classification performance on imbalanced data, a new Particle Swarm Optimization (PSO) based method was introduced. It optimizes the re-sampling rate and selects the feature set simultaneously, with an imbalanced-data evaluation metric as the objective function of the particle swarm optimization, so as to achieve the best data distribution. The proposed method was tested on a large number of UCI datasets and compared with state-of-the-art methods. The experimental results show that the proposed method has substantial advantages over other methods, and prove that optimizing the re-sampling rate and the feature set simultaneously can effectively improve performance on imbalanced data.
Adaptive random subspace ensemble classification aided by X-means clustering
CAO Peng, LI Bo, LI Wei, ZHAO Dazhe
Journal of Computer Applications    2013, 33 (02): 550-553.   DOI: 10.3724/SP.J.1087.2013.00550
To solve the low accuracy and efficiency of large-scale data classification, an adaptive random subspace ensemble classification algorithm aided by X-means clustering was proposed. X-means clustering was adopted to automatically separate the original data space into multiple clusters while maintaining the original data structure; moreover, the adaptive random subspace ensemble classifier enhances the diversity of the base components and automatically determines the number of base classifiers, improving robustness and accuracy. The experimental results show that, on large-scale, high-dimensional datasets, the proposed method improves on traditional single and ensemble classifiers in terms of accuracy and robustness, and also improves the overall efficiency of the algorithm.
Cluster Head Extraction for Data Compression in Wireless Sensor Networks
LIN Wei, LI Bo, HAN Li-hong
Journal of Computer Applications    2012, 32 (12): 3482-3485.   DOI: 10.3724/SP.J.1087.2012.03482
The Douglas-Peucker (DP) vector data compression algorithm was introduced into wireless sensor networks, and, to reduce the number of scans over the data during compression, an improved cluster head extraction compression algorithm was proposed, in which the extracted cluster heads are called data cluster heads. The cluster head extraction compression algorithm reduces the number of data scans in the compression process by setting a step size, applies optimal curve fitting to the monitored data points for linear fitting according to the adjacency relationships of the data, and extracts the cluster head data that reflect the overall characteristics; meanwhile, the non-cluster-head data are divided into subgroups. The simulation results show that the cluster head extraction compression algorithm has a simpler process, achieves better cluster head extraction for data with large fluctuations, reduces the amount of data transmitted in the network, and effectively saves energy consumption across the network.
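For reference, a standard Douglas-Peucker simplification of a reading series is sketched below; treating the retained points as the "data cluster heads" is an interpretation of the abstract, and the step-size and fitting refinements of the improved algorithm are not included.

```python
import math

def douglas_peucker(points, epsilon):
    """Classic Douglas-Peucker polyline simplification: keep the point farthest
    from the chord if its distance exceeds epsilon and recurse on both halves;
    otherwise keep only the endpoints."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1e-12
    # Perpendicular distance of every interior point from the chord.
    dists = [abs(dy * (x - x1) - dx * (y - y1)) / norm for x, y in points[1:-1]]
    idx = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[idx - 1] > epsilon:
        left = douglas_peucker(points[:idx + 1], epsilon)
        right = douglas_peucker(points[idx:], epsilon)
        return left[:-1] + right           # avoid duplicating the split point
    return [points[0], points[-1]]

# A sensor reading series treated as (sample index, value) pairs; the kept
# points play the role of "data cluster heads" that summarize the trend.
readings = [(t, 20 + 0.1 * t + (1.5 if t == 12 else 0.0)) for t in range(25)]
print(douglas_peucker(readings, epsilon=0.5))
```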
Medical named entity recognition based on Bi-LSTM-CRF and attention mechanism
ZHANG Huali, KANG Xiaodong, LI Bo, WANG Yage, LIU Hanqing, BAI Fang
Journal of Computer Applications    DOI: 10.11772/j.issn.1001-9081.2019081371
Accepted: 11 October 2019

Adaptive computing optimization of sparse matrix-vector multiplication based on heterogeneous platforms
LI Bo, HUANG Jianqiang, HUANG Dongqiang, WANG Xiaoying
Journal of Computer Applications    DOI: 10.11772/j.issn.1001-9081.2023111707
Online available: 22 March 2024